
Online and stochastic Douglas-Rachford splitting method for large scale machine learning


Abstract

Online and stochastic learning have emerged as powerful tools in large scale optimization. In this work, we generalize the Douglas-Rachford splitting (DRs) method for minimizing composite functions to online and stochastic settings (to the best of our knowledge, this is the first time DRs has been generalized to a sequential version). We first establish an $O(1/\sqrt{T})$ regret bound for the batch DRs method. We then prove that the online DRs splitting method enjoys an $O(1)$ regret bound and that stochastic DRs splitting has a convergence rate of $O(1/\sqrt{T})$. The proofs are simple and intuitive, and the results and techniques can serve as a starting point for research on large scale machine learning employing the DRs method. Numerical experiments with the proposed method demonstrate the effectiveness of the online and stochastic update rules and further confirm our regret and convergence analysis.
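As a concrete reference point, below is a minimal NumPy sketch of the classical batch Douglas-Rachford iteration for a composite objective $\min_x f(x) + g(x)$, instantiated for the lasso, together with a naive minibatch variant that evaluates the prox of the loss on a sampled subset of the data. The problem instance, the names (A, b, lam, gamma), and the minibatch-prox rule are illustrative assumptions; the paper's precise online/stochastic update and step-size schedule are not reproduced here.

```python
import numpy as np

# Sketch of Douglas-Rachford splitting for min_x f(x) + g(x), instantiated
# for the lasso: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
# All parameter names and values here are illustrative, not from the paper.

def prox_f(z, gamma, A, b):
    """prox_{gamma f}(z) = argmin_x 0.5||Ax - b||^2 + ||x - z||^2 / (2 gamma)."""
    n = A.shape[1]
    return np.linalg.solve(gamma * A.T @ A + np.eye(n), gamma * A.T @ b + z)

def prox_g(z, gamma, lam):
    """Soft-thresholding: prox of gamma * lam * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

def douglas_rachford(A, b, lam=0.1, gamma=1.0, iters=200):
    """Classical batch DR iteration: z <- z + prox_g(2 prox_f(z) - z) - prox_f(z)."""
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_f(z, gamma, A, b)          # prox of the loss term
        y = prox_g(2 * x - z, gamma, lam)   # reflected prox of the regularizer
        z = z + (y - x)                     # DR fixed-point update
    return x

def stochastic_dr(A, b, lam=0.1, gamma=1.0, steps=500, batch=32, seed=0):
    """Naive stochastic variant (an assumption, not the paper's exact rule):
    the prox of the loss is computed on a random minibatch of rows."""
    rng = np.random.default_rng(seed)
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(b), size=batch, replace=False)
        x = prox_f(z, gamma, A[idx], b[idx])  # prox on the sampled minibatch
        y = prox_g(2 * x - z, gamma, lam)
        z = z + (y - x)
    return x

if __name__ == "__main__":
    # Tiny synthetic sparse-recovery instance to exercise both solvers.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    print(np.linalg.norm(douglas_rachford(A, b) - x_true))
    print(np.linalg.norm(stochastic_dr(A, b) - x_true))
```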

Bibliographic Record

  • Authors

    Shi, Ziqiang; Liu, Rujie

  • Affiliation
  • Year 2016
  • Total pages
  • Original format PDF
  • Language English
  • CLC classification

